Use of Static Surrogates in Hyperparameter Optimization
Abstract
Optimizing the hyperparameters and architecture of a neural network is a long yet necessary phase in most applications. This time-consuming process can benefit from strategies designed to quickly discard low-quality configurations and focus on more promising candidates. This work aims at enhancing HyperNOMAD, a library that adapts a direct search derivative-free optimization algorithm to tune both the architecture and the training of a neural network simultaneously. Two static surrogates are developed to trigger an early stopping during the evaluation of a configuration and to strategically rank a pool of candidates. These additions to HyperNOMAD are shown to reduce its resource consumption by orders of magnitude without harming the quality of the proposed solutions.
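To make the two surrogate roles concrete, here is a minimal Python sketch of the ideas the abstract describes: a cheap, fixed-budget proxy used both to abort unpromising evaluations early and to order a pool of candidates before full training. All names (`low_fidelity_loss`, `should_stop_early`, `rank_candidates`, `train_fn`) are illustrative assumptions and do not correspond to HyperNOMAD's actual API.

```python
# Hypothetical sketch of the two static-surrogate roles; names are
# illustrative, not HyperNOMAD's API.

def low_fidelity_loss(config, train_fn, budget_epochs=2):
    """Static surrogate: train for a tiny, fixed budget and return the
    validation loss. Cheap because the budget never grows."""
    return train_fn(config, epochs=budget_epochs)

def should_stop_early(partial_losses, incumbent_loss, slack=1.5):
    """Abort a full evaluation when the partial learning curve is already
    far worse than the best configuration seen so far."""
    return min(partial_losses) > slack * incumbent_loss

def rank_candidates(pool, train_fn):
    """Order a pool of candidate configurations by surrogate value so the
    expensive full evaluations start with the most promising ones."""
    return sorted(pool, key=lambda cfg: low_fidelity_loss(cfg, train_fn))
```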
Similar Resources
Efficient Benchmarking of Hyperparameter Optimizers via Surrogates
Hyperparameter optimization is crucial for achieving peak performance with many machine learning algorithms; however, the evaluation of new optimization techniques on real-world hyperparameter optimization problems can be very expensive. Therefore, experiments are often performed using cheap synthetic test functions with characteristics rather different from those of real benchmarks of interest...
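As an illustration of the surrogate-benchmarking idea in this excerpt, the following sketch fits a cheap regressor to logged (configuration, error) pairs and exposes it as an inexpensive objective that optimizers can query instead of retraining models. The data and names are synthetic assumptions, not the paper's actual benchmarks.

```python
# Hedged sketch: a regressor trained on logged results stands in for an
# expensive real-world objective during optimizer benchmarking.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_logged = rng.uniform(0.0, 1.0, size=(500, 3))        # past configurations
y_logged = np.sin(X_logged).sum(axis=1) + 0.01 * rng.normal(size=500)

surrogate = RandomForestRegressor(n_estimators=100).fit(X_logged, y_logged)

def cheap_objective(config):
    """Stand-in for an expensive training run while benchmarking optimizers."""
    return float(surrogate.predict(np.asarray(config).reshape(1, -1))[0])
```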
Efficient Hyperparameter Optimization for Deep Learning Algorithms Using Deterministic RBF Surrogates
Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function...
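The Gaussian-process surrogate loop the excerpt mentions can be sketched in a few lines. The toy objective and the lower-confidence-bound acquisition below are simplifying assumptions; the cited paper itself uses deterministic RBF surrogates rather than Gaussian processes.

```python
# Minimal GP-based Bayesian optimization loop on a toy 1-D objective that
# stands in for a validation-error function of the learning rate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def validation_error(lr):                      # toy stand-in objective
    return (np.log10(lr) + 2.0) ** 2

lrs = 10.0 ** np.random.default_rng(1).uniform(-5, 0, size=5)
errs = validation_error(lrs)

for _ in range(20):
    gp = GaussianProcessRegressor().fit(np.log10(lrs).reshape(-1, 1), errs)
    grid = np.linspace(-5, 0, 200).reshape(-1, 1)
    mean, std = gp.predict(grid, return_std=True)
    nxt = 10.0 ** grid[np.argmin(mean - 1.96 * std), 0]   # LCB acquisition
    lrs, errs = np.append(lrs, nxt), np.append(errs, validation_error(nxt))

print("best lr:", lrs[np.argmin(errs)])
```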
Practical Hyperparameter Optimization
Recently, the bandit-based strategy Hyperband (HB) was shown to yield good hyperparameter settings of deep neural networks faster than vanilla Bayesian optimization (BO). However, for larger budgets, HB is limited by its random search component, and BO works better. We propose to combine the benefits of both approaches to obtain a new practical state-of-the-art hyperparameter optimization method...
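A rough sketch of the combination this excerpt describes (Hyperband-style successive halving seeded by a simple model of past results rather than pure random search) might look as follows. The `evaluate` signature, the perturbation-based proposal, and all parameters are placeholder assumptions, not the authors' actual method.

```python
import random

def run_bracket(evaluate, history, n=9, min_budget=1, eta=3):
    """One Hyperband-style bracket whose initial configurations come from a
    simple model of past results (the "BO part") instead of random search.
    `history` maps configurations to losses; lower is better."""
    if history:                                  # exploit past results
        best = min(history, key=history.get)
        configs = [min(1.0, max(0.0, best + random.gauss(0, 0.1)))
                   for _ in range(n)]
    else:                                        # no data yet: random search
        configs = [random.uniform(0, 1) for _ in range(n)]
    budget = min_budget
    while len(configs) > 1:                      # successive halving
        scores = {c: evaluate(c, budget) for c in configs}
        configs = sorted(configs, key=scores.get)[:max(1, len(configs) // eta)]
        history.update(scores)
        budget *= eta                            # survivors get more budget
    return configs[0]
```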
Hyperparameter Optimization: A Spectral Approach
We give a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime where the canonical example is training a neural network with a large number of hyperparameters. The algorithm, an iterative application of compressed sensing techniques for orthogonal polynomials, requires only uniform sampling...
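The sparse-recovery idea behind this spectral approach can be illustrated on a toy problem: sample a function of binary hyperparameters uniformly, then recover its few large parity (Fourier) coefficients with a sparse regression. Everything below is an illustrative assumption, not the paper's actual algorithm.

```python
# Toy compressed-sensing recovery of a sparse function over {-1, 1}^n.
import itertools
import numpy as np
from sklearn.linear_model import Lasso

n = 8                                            # binary hyperparameters
rng = np.random.default_rng(2)
X = rng.choice([-1.0, 1.0], size=(200, n))       # uniform samples
y = 2.0 * X[:, 0] * X[:, 3] - X[:, 5]            # sparse in the parity basis

# Design matrix of all parity features up to degree 2.
subsets = [s for d in (1, 2) for s in itertools.combinations(range(n), d)]
Phi = np.column_stack([X[:, list(s)].prod(axis=1) for s in subsets])

coef = Lasso(alpha=0.05).fit(Phi, y).coef_
for s, c in zip(subsets, coef):
    if abs(c) > 0.1:
        print(s, round(c, 2))                    # recovered large coefficients
```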
Hyperparameter optimization with approximate gradient
Most models in machine learning contain at least one hyperparameter to control for model complexity. Choosing an appropriate set of hyperparameters is both crucial in terms of model accuracy and computationally challenging. In this work we propose an algorithm for the optimization of continuous hyperparameters using inexact gradient information. An advantage of this method is that hyperparameter...
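As a toy illustration of gradient-based tuning of a continuous hyperparameter, the sketch below optimizes a ridge-regression penalty by descending a finite-difference estimate of the validation-loss gradient. This crude estimator is a stand-in assumption, not the paper's inexact hypergradient scheme.

```python
# Gradient descent on log-lambda for ridge regression, with the hypergradient
# approximated by central finite differences.
import numpy as np

rng = np.random.default_rng(3)
Xtr, Xva = rng.normal(size=(80, 10)), rng.normal(size=(40, 10))
w_true = rng.normal(size=10)
ytr = Xtr @ w_true + 0.5 * rng.normal(size=80)
yva = Xva @ w_true + 0.5 * rng.normal(size=40)

def val_loss(log_lam):
    """Validation MSE of the ridge solution for a given log-penalty."""
    lam = np.exp(log_lam)
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(10), Xtr.T @ ytr)
    return float(np.mean((Xva @ w - yva) ** 2))

log_lam, eps, lr = 0.0, 1e-4, 0.2
for _ in range(50):                              # descend on log-lambda
    grad = (val_loss(log_lam + eps) - val_loss(log_lam - eps)) / (2 * eps)
    log_lam -= lr * grad
print("lambda:", np.exp(log_lam), "val loss:", val_loss(log_lam))
```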
Journal
Journal title: Operations Research Forum
Year: 2022
ISSN: 2662-2556
DOI: https://doi.org/10.1007/s43069-022-00128-w